library(tidyverse)
library(dslabs)
library(Lahman)
library(HistData)
library(broom)
library(gridExtra)
Linear regression is commonly used to quantify the relationship between two or more variables. It is also used to adjust for confounding. In this course, we cover how to implement linear regression and adjust for confounding in practice using R.
The class notes for this course series can be found in Professor Irizarry’s freely available Introduction to Data Science book.
Course overview
There are three major sections in this course: introduction to linear regression, linear models, and confounding.
1. Introduction to Linear Regression
In this section, you’ll learn the basics of linear regression through this course’s motivating example, the data-driven approach used to construct baseball teams. You’ll also learn about correlation, the correlation coefficient, stratification, and the variance explained.
2. Linear Models
In this section, you’ll learn about linear models. You’ll learn about least squares estimates, multivariate regression, and several useful features of R, such as tibbles, lm, do, and broom. You’ll learn how to apply regression to baseball to build a better offensive metric.
3. Confounding
In the final section of the course, you’ll learn about confounding and several reasons that correlation is not the same as causation, such as spurious correlation, outliers, reversing cause and effect, and confounders. You’ll also learn about Simpson’s Paradox.
In the Introduction to Regression section, you will learn the basics of linear regression.
After completing this section, you will be able to:
This section has three parts: Baseball as a Motivating Example, Correlation, and Stratification and Variance Explained. There are comprehension checks that follow most videos.
As motivation for this course, we’ll go back to 2002 and try to build a baseball team with a limited budget. Note that in 2002, the Yankees’ payroll was almost $130 million, more than triple the Oakland A’s $40 million budget. Statistics have been used in baseball since its beginnings. The data set we will be using, included in the Lahman library, goes back to the 19th century. For example, a summary statistic we will describe soon, the batting average, has been used to summarize a batter’s success for decades. Other statistics, such as home runs, runs batted in, and stolen bases (we’ll describe all of these soon), are reported for each player in the game summaries included in the sports section of newspapers, and players are rewarded for high numbers. Although summary statistics were widely used in baseball, data analysis per se was not. These statistics were arbitrarily decided on without much thought as to whether they actually predicted, or were related to, helping a team win. This all changed with Bill James. In the late 1970s, this aspiring writer and baseball fan started publishing articles describing more in-depth analysis of baseball data. He named this approach of using data to determine which outcomes best predict whether a team wins sabermetrics. Until Billy Beane made sabermetrics the center of his baseball operations, Bill James’ work was mostly ignored by the baseball world. Today, pretty much every team uses the approach, and it has gone beyond baseball into other sports. In this course, to simplify the example we use, we’ll focus on predicting scoring runs and ignore pitching and fielding, although those are important as well. We will see how regression analysis can help develop strategies to build a competitive baseball team with a constrained budget.
The approach can be divided into two separate data analyses.
In the first, we determine which recorded player specific statistics predict runs.
In the second, we examine if players were undervalued based on what our first analysis predicts.
What is the application of statistics and data science to baseball called?
We actually don’t need to understand all the details about the game of baseball, which has over 100 rules, to see how regression will help us find undervalued players. Here, we distill the sport to the basic knowledge one needs to know to effectively attack the data science challenge. Let’s get started. The goal of a baseball game is to score more runs, they’re like points, than the other team. Each team has nine batters that bat in a predetermined order. After the ninth batter hits, we start with the first again. Each time they come to bat, we call it a plate appearance, PA. At each plate appearance, the other team’s pitcher throws the ball and you try to hit it. The plate appearance ends with a binary outcome– you either make an out, that’s a failure and sit back down, or you don’t, that’s a success and you get to run around the bases and potentially score a run. Each team gets nine tries, referred to as innings, to score runs. Each inning ends after three outs, after you’ve failed three times. From these examples, we see how luck is involved in the process. When you bat you want to hit the ball hard. If you hit it hard enough, it’s a home run, the best possible outcome as you get at least one automatic run. But sometimes, due to chance, you hit the ball very hard and a defender catches it, which makes it an out, a failure. In contrast, sometimes you hit the ball softly but it lands just in the right place. You get a hit which is a success. The fact that there is chance involved hints at why probability models will be involved in all this. Now there are several ways to succeed. Understanding this distinction will be important for our analysis. When you hit the ball you want to pass as many bases as possible. There are four bases with the fourth one called home plate. Home plate is where you start, where you try to hit. So the bases form a cycle. If you get home, you score a run. We’re simplifying a bit. But there are five ways you can succeed. In other words, not making an out.
5 ways to succeed:
- base on balls (BB)
- single
- double (X2B)
- triple (X3B)
- home run (HR)
First one is called a base on balls. This is when the pitcher does not pitch well and you get to go to first base. A single is when you hit the ball and you get to first base. A double is when you hit the ball and you go past first base to second. Triple is when you do that but get to third. And a home run is when you hit the ball and go all the way home and score a run. If you get to a base, you still have a chance of getting home and scoring a run if the next batter hits successfully. While you are on base, you can also try to steal a base. If you run fast enough, you can try to go from first to second or from second to third without the other team tagging you. All right. Now historically, the batting average has been considered the most important offensive statistic. To define this average, we define a hit and an at bat. Singles, doubles, triples, and home runs are hits. But remember, there’s a fifth way to be successful, the base on balls. That is not a hit. An at bat is the number of times you either get a hit or make an out, bases on balls are excluded.
\[\textit{batting average} = \frac{H}{AB}\]
The batting average is simply hits divided by at bats, and it is considered the main measure of a success rate. In today’s game, this success rate ranges from about 20% to 38% across players. We refer to the batting average in thousandths, so, for example, if your success rate is 25%, we say you’re batting 250. One of Bill James’ first important insights is that the batting average ignores bases on balls, but a base on balls is a success. So a player who gets many more bases on balls than the average player might not be recognized if he does not excel in batting average. But is this player not helping produce runs? No award is given to the player with the most bases on balls. In contrast, the total number of stolen bases is considered important, and an award is given to the player with the most. But players with high stolen base totals also make outs, as they do not always succeed. So does a player with a high stolen base total help produce runs? Can we use data science to determine if it’s better to pay for bases on balls or stolen bases? One of the challenges in this analysis is that it is not obvious how to determine if a player produces runs, because so much depends on his teammates. We do keep track of the number of runs scored by a player, but note that if you hit after someone who hits many home runs, you will score many runs. Those runs don’t necessarily happen if we hire this player but not his home run hitting teammate. However, we can examine team-level statistics. How do teams with many stolen bases compare to teams with few? How about bases on balls? We have data. Let’s examine some.
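As a quick, added illustration of the definition above (this code is not part of the original notes), the batting average can be computed directly from the Lahman Batting table loaded at the top of these notes; the 100 at-bat cutoff is an arbitrary choice to remove small samples.
# Added illustration (not original course code): batting averages for the 2001
# season, computed from the Lahman Batting table.
Batting %>%
  filter(yearID == 2001, AB >= 100) %>%   # arbitrary cutoff to drop small samples
  mutate(BA = H / AB) %>%                 # batting average = hits / at bats
  select(playerID, H, AB, BA) %>%
  arrange(desc(BA)) %>%
  head()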
Which of the following outcomes is not included in the batting average?
Why do we consider team statistics as well as individual statistics?
Let’s start looking at some baseball data and try to answer our questions using these data. First one: do teams that hit more home runs score more runs? We know what the answer to this will be, but let’s look at the data anyway. We’re going to examine data from 1961 to 2001. We end at 2001 because, remember, we’re back in 2002, getting ready to build a team. We start in 1961 because that year the league changed from 154 games to 162 games. The visualization of choice when exploring the relationship between two variables like home runs and runs is a scatterplot.
data("Teams")
ds_theme_set()
Teams %>%
filter(yearID %in% 1961:2001) %>%
mutate(HR_per_game = HR / G, R_per_game = R / G) %>%
ggplot(aes(HR_per_game, R_per_game)) +
geom_point(alpha = 0.5)
The following code shows you how to make that scatterplot. We start by loading the Lahman library that has all these baseball statistics. And then we simply make a scatterplot using ggplot. Here’s a plot of runs per game versus home runs per game. The plot shows a very strong association– teams with more home runs tended to score more runs.
Teams %>%
filter(yearID %in% 1961:2001) %>%
mutate(SB_per_game = SB / G, R_per_game = R / G) %>%
ggplot(aes(SB_per_game, R_per_game)) +
geom_point(alpha = 0.5)
Now, let’s examine the relationship between stolen bases and runs. Here are the runs per game plotted against stolen bases per game. Here, the relationship is not as clear.
Teams %>%
filter(yearID %in% 1961:2001) %>%
mutate(BB_per_game = BB / G, R_per_game = R / G) %>%
ggplot(aes(BB_per_game, R_per_game)) +
geom_point(alpha = 0.5)
Finally, let’s examine the relationship between bases on balls and runs. Here are runs per game versus bases on balls per game. Although the relationship is not as strong as it was for home runs, we do see a pretty strong relationship here. We know that, by definition, home runs cause runs, because when you hit a home run, at least one run scores. Now, it could be that home runs also cause bases on balls. If you understand the game, you will agree that that could be the case. So it might appear that bases on balls are causing runs, when in fact it’s home runs that cause both. This is called confounding, an important concept you will learn about. Linear regression will help us parse all this out and quantify the associations, and this will then help us determine which players to recruit. Specifically, we will try to predict things like how many more runs a team will score if we increase the number of bases on balls but keep home runs fixed. Regression will help us answer this question as well.
You want to know whether teams with more at-bats per game have more runs per game. Which R code below correctly makes a scatterplot for this relationship?
Teams %>% filter(yearID %in% 1961:2001) %>%
ggplot(aes(AB, R)) +
geom_point(alpha = 0.5)
Teams %>% filter(yearID %in% 1961:2001) %>%
mutate(AB_per_game = AB / G, R_per_game = R / G) %>%
ggplot(aes(AB_per_game, R_per_game)) +
geom_point(alpha = 0.5)
Teams %>% filter(yearID %in% 1961:2001) %>%
mutate(AB_per_game = AB / G, R_per_game = R / G) %>%
ggplot(aes(AB_per_game, R_per_game)) +
geom_line()
Teams %>% filter(yearID %in% 1961:2001) %>%
mutate(AB_per_game = AB / G, R_per_game = R / G) %>%
ggplot(aes(R_per_game, AB_per_game)) +
geom_point()
Answer is B
What does the variable ‘SOA’ stand for in the Teams table? Hint: make sure to use the help file (?Teams)
Up to now in this series, we have focused mainly on one variable at a time. However, in data science applications it is very common to be interested in the relationship between two or more variables. We saw this in our baseball example, in which we were interested in the relationship between, for example, bases on balls and runs. We’ll come back to this example, but we introduce the concepts of correlation and regression using a simpler example: the dataset from which regression was born, an example from genetics. Francis Galton studied the variation and heredity of human traits. Among many other traits, Galton collected and studied height data from families to try to understand heredity. While doing this, he developed the concepts of correlation and regression, and a connection to pairs of data that follow a normal distribution. Note that, at the time this data was collected, what we know today about genetics was not yet understood. A very specific question Galton tried to answer was: how much of a son’s height can I predict with the parents’ height? Note that this is similar to predicting runs with bases on balls. We have access to Galton’s family data through the HistData package; HistData stands for historical data.
galton_heights <- GaltonFamilies %>%
filter(childNum == 1 & gender == 'male') %>%
select(father, childHeight) %>%
rename(son = childHeight)
We’ll create a data set with the heights of fathers and their first sons, the actual data Galton used to discover and define regression. So we have the father and son height data. Suppose we were to summarize these data. Since both distributions are well approximated by normal distributions, we can use the two averages and two standard deviations as summaries.
galton_heights %>%
summarize(mean(father), sd(father), mean(son), sd(son))
## mean(father) sd(father) mean(son) sd(son)
## 1 69.09888 2.546555 70.45475 2.557061
Here they are. You can see the average height for fathers is 69 inches, with a standard deviation of 2.55 inches. The sons are a little taller: the average height is 70.45 inches, and the standard deviation is 2.56 inches.
galton_heights %>%
ggplot(aes(father, son)) +
geom_point(alpha = 0.5)
However, this summary fails to describe a very important characteristic of the data that you can see in this figure. The trend that the taller the father, the taller the son, is not described by the summary statistics of the average and the standard deviation. We will learn that the correlation coefficient is a summary of this trend.
While studying heredity, Francis Galton developed what important statistical concept?
The correlation coefficient is a summary of what?
The correlation coefficient is defined for a list of pairs – \((x_1, y_1),...,(x_n, y_n)\) – with the following formula: \[\rho = \frac{1}{n} \sum_{i=1}^{n} \Bigg(\frac{x_i-\mu_x}{\sigma_x}\Bigg) * \Bigg(\frac{y_i-\mu_y}{\sigma_y}\Bigg)\]
Here, \(\mu_x\) and \(\mu_y\) are the averages of x and y, respectively, and \(\sigma_x\) and \(\sigma_y\) are the standard deviations. The Greek letter rho is commonly used in statistics books to denote this correlation; rho is the Greek letter for r, the first letter of the word regression. Soon, we will learn about the connection between correlation and regression. To understand why this equation does, in fact, summarize how two variables move together, consider that the i-th entry of x is \((x_i - \mu_x)/\sigma_x\) SDs away from the average: \[\bigg( \frac{x_i - \mu_x}{\sigma_x} \bigg) \]
Similarly, the \(y_i\) – which is paired with the \(x_i\)– is \(y_i\) minus \(\mu_y\) divided by \(\sigma_y\) SDs away from the average y. \[\bigg( \frac{y_i - \mu_y}{\sigma_y} \bigg) \]
The average (mean): \[\mu_x = \frac{1}{n} \sum_{i=1}^n x_i=\frac{1}{n}\big(x_1+x_2+x_3+...+x_n\big) \]
The standard deviation (sd): \[\sigma_x = \sqrt{\frac{1}{n} \sum_{i=1}^n (x_i-\mu_x)^2}\]
The variance is the square of the standard deviation: \[\sigma_x^2 = \frac{1}{n} \sum_{i=1}^n (x_i-\mu_x)^2\]
If x and y are unrelated, then the product of these two quantities will be positive (when both are above or both below their averages) as often as it will be negative (when one is above and the other below), so the products will average to about 0. The correlation is this average, and therefore unrelated variables will have a correlation of about 0. If instead the quantities vary together, then we are averaging mostly positive products, because they will be either positive times positive or negative times negative, and we get a positive correlation. If they vary in opposite directions, we get a negative correlation. Another thing to know is that we can show mathematically that the correlation is always between negative 1 and 1. To see this, consider that we can’t have higher correlation than when we compare a list to itself; that would be perfect correlation.
\[\rho = \frac{1}{n} \sum_{i=1}^{n} \Bigg(\frac{x_i-\mu_x}{\sigma_x}\Bigg)^2 = \frac{1}{\sigma_x^2} \frac{1}{n} \sum_{i=1}^{n} (x_i-\mu_x)^2 = 1\]
In this case, the correlation is given by this equation, which we can show is equal to 1. A similar argument with x and its exact opposite, negative x, proves that the correlation has to be greater or equal to negative 1. So it’s between minus 1 and 1.
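As an added check of the definition (not code from the original notes), we can compute the correlation between the father and son heights directly from the formula above and compare it with R’s cor function; the tiny difference comes from sd using an n minus 1 divisor while the formula uses n.
# Added sketch: the correlation computed from its definition, compared with cor()
x <- galton_heights$father
y <- galton_heights$son
mean(((x - mean(x)) / sd(x)) * ((y - mean(y)) / sd(y)))
cor(x, y)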
galton_heights <- GaltonFamilies %>%
filter(childNum == 1 & gender == 'male') %>%
select(father, childHeight) %>%
rename(son = childHeight)
galton_heights %>% summarize(cor(father, son))
## cor(father, son)
## 1 0.5007248
The correlation between father and sons’ height is about 0.5. You can compute that using this code. We saw what the data looks like when the correlation is 0.5.
To see what data look like for other values of rho, here are six examples of pairs with correlations ranging from negative 0.9 to 0.99. When the correlation is negative, the points go in opposite directions: as x increases, y decreases. As the correlation gets closer to 1 or negative 1, the cloud of points gets thinner and thinner. When the correlation is 0, we just see a big circle of points.
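The six-panel figure itself is not reproduced in these notes, but a similar one can be simulated; the code below is an added sketch (it assumes the MASS package is available for mvrnorm) rather than the original figure code.
# Added sketch: simulate bivariate normal pairs with several correlations and
# plot them, roughly reproducing the figure described above.
set.seed(1)
sim <- map_df(c(-0.9, -0.5, 0, 0.5, 0.9, 0.99), function(r){
  xy <- MASS::mvrnorm(200, mu = c(0, 0), Sigma = matrix(c(1, r, r, 1), 2, 2))
  data.frame(x = xy[, 1], y = xy[, 2], rho = r)
})
sim %>%
  ggplot(aes(x, y)) +
  geom_point(alpha = 0.5) +
  facet_wrap(~rho)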
Below is a scatterplot showing the relationship between two variables, x and y.
From this figure, the correlation between x and y appears to be about:
Before we continue describing regression, let’s go over a reminder about random variability. In most data science applications, we do not observe the population, but rather a sample. As with the average and standard deviation, the sample correlation is the most commonly used estimate of the population correlation. This implies that the correlation we compute and use as a summary is a random variable. As an illustration, let’s assume that the 179 pairs of fathers and sons is our entire population. A less fortunate geneticist can only afford to take a random sample of 25 pairs.
set.seed(0)
R <- sample_n(galton_heights, 25, replace = TRUE) %>%
summarize(cor(father, son))
R
## cor(father, son)
## 1 0.6687255
The sample correlation for this random sample can be computed using this code. Here, the variable R is the random variable.
B <- 1000
N <- 25
R <- replicate(B,{
R <- sample_n(galton_heights, N, replace = TRUE) %>%
summarize(r = cor(father, son)) %>% .$r
})
data.frame(R) %>%
ggplot(aes(R)) +
geom_histogram(binwidth = 0.05, color ="black")
mean(R)
## [1] 0.5005559
sd(R)
## [1] 0.1472816
We can run a Monte-Carlo simulation to see the distribution of this random variable. Here, we recreate R 1000 times, and plot its histogram. We see that the expected value is the population correlation, the mean of these Rs is 0.5, and that it has a relatively high standard error relative to its size, SD 0.147. This is something to keep in mind when interpreting correlations. It is a random variable, and it can have a pretty large standard error. Also note that because the sample correlation is an average of independent draws, the Central Limit Theorem actually applies.
\[ R \sim N \bigg(\rho,\sqrt{\frac{1-\rho^2}{N-2}} \bigg) \]
Therefore, for a large enough sample size N, the distribution of these Rs is approximately normal. The expected value we know is the population correlation. The standard deviation is somewhat more complex to derive, but this is the actual formula here.
qqnorm(R); qqline(R)
In our example, N = 25 does not appear to be large enough to make the approximation a good one, as we see in this QQ-plot.
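As an added numeric check (not in the original notes), we can compare the standard error suggested by the formula with the Monte Carlo standard deviation computed above; the gap is consistent with the approximation not being great at N = 25.
# Added check: theoretical standard error from the approximation vs. the
# Monte Carlo standard deviation of R computed above.
rho <- cor(galton_heights$father, galton_heights$son)
sqrt((1 - rho^2) / (N - 2))   # about 0.18
sd(R)                         # about 0.147 in the simulation above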
Instead of running a Monte Carlo simulation with a sample size of 25 from our 179 father-son pairs, we now run our simulation with a sample size of 50.
Would you expect the mean of our sample correlation to increase, decrease, or stay approximately the same?
Instead of running a Monte Carlo simulation with a sample size of 25 from our 179 father-son pairs, we now run our simulation with a sample size of 50.
Would you expect the standard deviation of our sample correlation to increase, decrease, or stay approximately the same?
Correlation is not always a good summary of the relationship between two variables.
A famous example used to illustrate this is the following set of four artificial data sets, referred to as Anscombe’s quartet. All of these pairs have a correlation of 0.82. Correlation is only meaningful in a particular context. To help us understand when correlation is meaningful as a summary statistic, we’ll try to predict the son’s height using the father’s height. This will help motivate and define linear regression. We start by demonstrating how correlation can be useful for prediction. Suppose we are asked to guess the height of a randomly selected son. Because the distribution of son heights is approximately normal, the average height, 70.5 inches, is the value with the highest proportion and would be the prediction that minimizes our chance of error. But what if we are told that the father is 72 inches tall? Do we still guess 70.5 inches for the son? The father is taller than average, specifically 1.14 standard deviations taller than the average father. So should we predict that the son is also 1.14 standard deviations taller than the average son? It turns out that this would be an overestimate. To see this, we look at all the sons with fathers who are about 72 inches tall. We do this by stratifying on the fathers’ heights. We call this a conditional average, since we are computing the average son height conditioned on the father being 72 inches tall. A challenge when using this approach in practice is that we don’t have many fathers who are exactly 72 inches; in our data set, we have only eight. If we change the number to 72.5, we would have only one father of that height. This would result in averages with large standard errors, which won’t be useful for prediction. For now, we will instead create strata of fathers with very similar heights by rounding fathers’ heights to the nearest inch. This gives a prediction of 71.84 inches for the son of a father who is approximately 72 inches tall. That is 0.54 standard deviations larger than the average son, a smaller number than the 1.14 standard deviations that the father was above the average father. Stratification followed by box plots lets us see the distribution of each group. The centers of these groups increase with father height, not surprisingly, and the means of each group appear to follow a linear relationship. The slope of this line appears to be about 0.5, which happens to be the correlation between father and son heights. This is not a coincidence. To see the connection, we plot the standardized heights against each other, son versus father, with a line that has a slope equal to the correlation. This line is what we call the regression line. In a later video, we will describe Galton’s theoretical justification for using this line to estimate conditional means. Here, we define it and compute it for the data at hand. The regression line for two variables, x and y, tells us that for every standard deviation \(\sigma_x\) increase above the average \(\mu_x\) in x, y grows \(\rho\) standard deviations \(\sigma_y\) above the average \(\mu_y\):
\[\bigg( \frac{y_i-\mu_y}{\sigma_y} \bigg) = \rho \bigg( \frac{x_i-\mu_x}{\sigma_x} \bigg)\]
The formula for the regression line is therefore this one. If there is perfect correlation, we predict an increase that is the same number of SDs. If there is zero correlation, then we don’t use x at all for the prediction of y. For values between 0 and 1, the prediction is somewhere in between. If the correlation is negative, we predict a reduction instead of an increase. It is because, when the correlation is positive but lower than 1, we predict something closer to the mean that we call this regression: the son regresses to the average height. In fact, the title of Galton’s paper was “Regression Towards Mediocrity in Hereditary Stature.”
\[y=b+mx\\slope~(m)=\rho\frac{\sigma_y}{\sigma_x} \\intercept~(b)=\mu_y-m\mu_x \]
Note that if we write this in the standard form of a line, y equals b plus mx, where b is the intercept and m is the slope, the regression line has slope rho times sigma y divided by sigma x, and intercept mu y minus the slope times mu x. So if we standardize the variables so they have average 0 and standard deviation 1, then the regression line has intercept 0 and slope equal to the correlation rho. Let’s look at the original father and son data and add the regression line. We can compute the intercept and the slope using the formulas we just derived; a sketch of the code for this plot appears below. If we plot the data in standard units, then, as we discussed, the regression line has intercept 0 and slope rho. We started this discussion by saying that we wanted to use the conditional means to predict the heights of the sons, but then we realized that there were very few data points in each stratum. When we rounded off the heights of the fathers, we found that the conditional means appear to follow a line, and we ended up with the regression line. So the regression line gives us the prediction. An advantage of using the regression line is that we use all the data to estimate just two parameters, the slope and the intercept, which makes the prediction much more stable. The conditional means were based on fewer data points, so those estimates had large standard errors and were unstable. However, are we justified in using the regression line to predict? Galton gives us the answer.
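The plotting code referred to above is not included in these notes; what follows is a sketch (added here) that reconstructs it from the formulas just derived, using the galton_heights data defined earlier.
# Added sketch: slope and intercept from the formulas, then the regression line
mu_x <- mean(galton_heights$father)
mu_y <- mean(galton_heights$son)
s_x <- sd(galton_heights$father)
s_y <- sd(galton_heights$son)
r <- cor(galton_heights$father, galton_heights$son)
m <- r * s_y / s_x       # slope
b <- mu_y - m * mu_x     # intercept
galton_heights %>%
  ggplot(aes(father, son)) +
  geom_point(alpha = 0.5) +
  geom_abline(intercept = b, slope = m)
# In standard units the same line has intercept 0 and slope r
galton_heights %>%
  mutate(father = (father - mu_x) / s_x, son = (son - mu_y) / s_y) %>%
  ggplot(aes(father, son)) +
  geom_point(alpha = 0.5) +
  geom_abline(intercept = 0, slope = r)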
Look at the figure below. The slope of the regression line in this figure is equal to what, in words?
Slope = (correlation coefficient of son and father heights) * (standard deviation of sons’ heights / standard deviation of fathers’ heights)
Slope = (correlation coefficient of son and father heights) * (standard deviation of fathers’ heights / standard deviation of sons’ heights)
Slope = (correlation coefficient of son and father heights) / (standard deviation of sons’ heights * standard deviation of fathers’ heights)
Slope = (mean height of fathers) - (correlation coefficient of son and father heights * mean height of sons).
Why does the regression line simplify to a line with intercept zero and slope \(\rho\) when we standardize our x and y variables? Try the simplification on your own first!
When we standardize variables, both x and y will have a mean of one and a standard deviation of zero. When you substitute this into the formula for the regression line, the terms cancel out until we have the following equation: \(y_i~=~\rho*x_i\)
When we standardize variables, both x and y will have a mean of zero and a standard deviation of one. When you substitute this into the formula for the regression line, the terms cancel out until we have the following equation: \(y_i~=~\rho*x_i\)
When we standardize variables, both x and y will have a mean of zero and a standard deviation of one. When you substitute this into the formula for the regression line, the terms cancel out until we have the following equation: \(y_i~=~\rho+x_i\)
What is a limitation of calculating conditional means?
Select ALL that apply.
Each stratum we condition on (e.g., a specific father’s height) may not have many data points.
Because there are limited data points for each stratum, our average values have large standard errors.
Conditional means are less stable than a regression line.
Conditional means are a useful theoretical tool but cannot be calculated.
Correlation and the regression line are widely used summary statistics, but they are often misused or misinterpreted. Anscombe’s example provided artificial data sets in which summarizing with a correlation would be a mistake, and we also see this in the media and in the scientific literature. The main way we motivate the use of correlation involves what is called the bivariate normal distribution. When a pair of random variables is approximated by a bivariate normal distribution, the scatterplot looks like an oval, like an American football. The ovals can be thin (that’s when they have high correlation) all the way up to a circle shape when they have no correlation. We saw some examples previously. A more technical way to define the bivariate normal distribution is the following. First, this distribution is defined for pairs, so we have two variables, x and y, with paired values. They are bivariate normally distributed if the following happens: x is a normally distributed random variable, y is also a normally distributed random variable, and for any grouping of x that we can define, say with x equal to some predetermined value, which we call here little x, the y’s in that group are approximately normal as well. If this happens, then the pair is approximately bivariate normal. When we fix x in this way, we refer to the resulting distribution of the y’s in the group defined by setting x in this way as the conditional distribution of y given x. \[f_{Y|X=x}~\text{is the conditional distribution and}~E(Y|X=x)~\text{is the conditional expected value}\]
We write the notation like this for the conditional distribution and the conditional expectation. If we think the height data is well-approximated by the bivariate normal distribution, then we should see the normal approximation hold for each grouping. Here, we stratify the son height by the standardized father heights and see that the assumption appears to hold.
data("GaltonFamilies")
# Prepare the data set with the heights of fathers and the first son
galton_heights <- GaltonFamilies %>%
filter(childNum == 1 & gender == 'male') %>%
select(father, childHeight) %>%
rename(son = childHeight)
# bivariate normal distribution
galton_heights %>%
mutate(z_father = round((father - mean(father)) / sd(father))) %>%
filter(z_father %in% -2:2) %>%
ggplot() +
stat_qq(aes(sample = son)) +
facet_wrap(~z_father)
Here’s the code that gives us the desired plot.
\[E(Y|X=x)~=~\mu_y~+~\rho~*~\frac{x~-~\mu_x}{\sigma_x}~*~\sigma_y\]
Now, we come back to defining correlation. Galton showed – using mathematical statistics – that when two variables follow a bivariate normal distribution, then for any given x the expected value of the y in pairs for which x is set at that value is mu y plus rho x minus mu x divided by sigma x times sigma y.
\[\frac{E(Y|X=x)~-~\mu_y}{\sigma_y}~=~\rho~*~\frac{x~-~\mu_x}{\sigma_x}\]
Note that this is a line with slope rho times sigma y divided by sigma x, and intercept mu y minus the slope times mu x. Therefore, this is the same as the regression line we saw in a previous video, which can be written as shown above. So, in summary, if our data are approximately bivariate normal, then the conditional expectation, which is the best prediction for y given that we know the value of x, is given by the regression line.
A regression line is the best prediction of Y given we know the value of X when:
X and Y follow a bivariate normal distribution.
Both X and Y are normally distributed.
Both X and Y have been standardized.
There are at least 25 X-Y pairs.
The theory we’ve been describing also tells us that the standard deviation of the conditional distribution that we described in a previous video is:
\[SD(Y|X=x)~=~\sigma_y~\sqrt{1~-~\rho^2} \]
This is where statements like “x explains such and such percent of the variation in y” come from. Note that the variance of y is \(\sigma_y^2\); that’s where we start. If we condition on x, then the variance goes down to \((1 - \rho^2)\sigma_y^2\). From there, we can compute how much the variance has gone down: it has gone down by \(\rho^2 \times 100\)%. So the correlation and the amount of variance explained are related to each other. But it is important to remember that the “variance explained” statement only makes sense when the data are approximated by a bivariate normal distribution.
We previously calculated that the correlation coefficient \(\rho\) between fathers’ and sons’ heights is 0.5.
Given this, what percent of the variation in sons’ heights is explained by fathers’ heights?
0%
25%
50%
75%
When two variables follow a bivariate normal distribution, the variation explained can be calculated as: \[\rho^2~*~100\]
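As a quick added check with the Galton data defined earlier (this code is not part of the original notes):
# Added check: fraction of the variation in son heights explained by father heights
rho <- cor(galton_heights$father, galton_heights$son)
rho^2 * 100   # about 25%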
We computed a regression line to predict the son’s height from the father’s height. We used these calculations to get the slope and the intercept.
data("GaltonFamilies")
# Prepare the data set with the heights of fathers and the first son
galton_heights <- GaltonFamilies %>%
filter(childNum == 1 & gender == 'male') %>%
select(father, childHeight) %>%
rename(son = childHeight)
# Calculate the means, standard deviations, correlation coefficient, intercept, and the slope
mu_x <- mean(galton_heights$father)
mu_y <- mean(galton_heights$son)
s_x <- sd(galton_heights$father)
s_y <- sd(galton_heights$son)
r <- cor(galton_heights$father, galton_heights$son)
m <- r * (s_y / s_x)
b <- mu_y - (m * mu_x)
\[E(Y|X=x)~=~b+mx~=~35.7 + 0.5x\]
This gives us the function that the conditional expectation of y given x is 35.7 plus 0.5 times x. So, what if we wanted to predict the father’s height based on the son’s?
\[x~\ne~\frac{E(Y|X=x)~-~b}{0.5}\]
It is important to know that this is not determined by computing the inverse function of what we just saw, which would be this equation here. We need to compute the expected value of x given y.
\[E(X|Y=y)~=~b+my~=~34.0 + 0.5y\]
This gives us another regression function altogether, with slope and intercept computed like this. So now we get that the expected value of x given y, or the expected value of the father’s height given the son’s height, is equal to 34 plus 0.5 y, a different regression line.
So in summary, it’s important to remember that the regression line comes from computing expectations, and these give you two different lines, depending on if you compute the expectation of y given x or x given y.
data("GaltonFamilies")
# Prepare the data set with the heights of fathers and the first son
galton_heights <- GaltonFamilies %>%
filter(childNum == 1 & gender == 'male') %>%
select(father, childHeight) %>%
rename(son = childHeight)
# Calculate the means, standard deviations, and correlation coefficient
mu_x <- mean(galton_heights$father)
mu_y <- mean(galton_heights$son)
s_x <- sd(galton_heights$father)
s_y <- sd(galton_heights$son)
r <- cor(galton_heights$father, galton_heights$son)
# Intercept and slope for predicting the son's height given the father's height
m_y <- r * (s_y / s_x)
b_y <- mu_y - (m_y * mu_x)
# Intercept and slope for predicting the father's height given the son's height
m_x <- r * (s_x / s_y)
b_x <- mu_x - (m_x * mu_y)
# Plot both lines on the (father, son) axes; the father-given-son line
# x = b_x + m_x * y is rewritten as son = (father - b_x) / m_x so it can be drawn
galton_heights %>%
  ggplot(aes(father, son)) +
  geom_point(alpha = 0.5) +
  geom_abline(color = 'blue', intercept = b_y, slope = m_y) +            # E(son | father)
  geom_abline(color = 'green', intercept = -b_x / m_x, slope = 1 / m_x)  # E(father | son)
In the Linear Models section, you will learn how to do linear regression.
After completing this section, you will be able to:
This section has four parts: Introduction to Linear Models, Least Squares Estimates, Tibbles, do, and broom, and Regression and Baseball. There are comprehension checks that follow most videos.
In a previous video, we found that the slope of the regression line for predicting runs from bases on balls was 0.735.
So, does this mean that if we go out and hire low-salary players with many bases on balls, and thereby increase the number of walks per game by 2, our team will score 1.47 more runs per game? We are again reminded that ASSOCIATION IS NOT CAUSATION. The data do provide strong evidence that a team with 2 more bases on balls per game than the average team scores 1.47 more runs per game, but this does not mean that bases on balls are the cause. If we compute the regression line slope for singles, we get 0.449, a lower value. Note that a single gets you to first base just like a base on balls does. Those who know a bit more about baseball will tell you that with a single, runners on base have a better chance of scoring than with a base on balls. So how can bases on balls be more predictive of runs? The reason this happens is confounding.
data("Teams")
Teams %>%
filter(yearID %in% 1961:2001) %>%
mutate(Singles = ( H - HR - X2B - X3B) / G, BB = BB / G, HR = HR / G) %>%
summarize(cor(BB, HR), cor(Singles, HR), cor(BB, Singles))
## cor(BB, HR) cor(Singles, HR) cor(BB, Singles)
## 1 0.4039125 -0.1738404 -0.05605071
Note the correlations between home runs, bases on balls, and singles. We see that the correlation between bases on balls and home runs is quite high compared to the other two pairs. It turns out that pitchers, afraid of home runs, will sometimes avoid throwing strikes to home run hitters. As a result, home run hitters tend to have more bases on balls. Thus, a team with many home runs will also have more bases on balls than average, and as a result, it may appear that bases on balls cause runs, when it is actually the home runs that cause the runs. In this case, we say that bases on balls are confounded with home runs. But could it be that bases on balls still help? To find out, we somehow have to adjust for the home run effect. Regression can help with this.
Why is the number of home runs considered a confounder of the relationship between bases on balls and runs per game?
Home runs is not a confounder of this relationship.
Home runs are the primary cause of runs per game.
The correlation between home runs and runs per game is stronger than the correlation between bases on balls and runs per game.
Players who get more bases on balls also tend to have more home runs; in addition, home runs increase the points per game.
To try to determine if bases on balls is still useful for creating runs, a first approach is to keep home runs fixed at a certain value and then examine the relationship between runs and bases on balls. As we did when we stratified fathers by rounding to the closest inch, here, we can stratify home runs per game to the closest 10th.
data("Teams")
dat <- Teams %>%
filter(yearID %in% 1961:2001) %>%
mutate(HR_strata = round(HR / G, 1),
BB_per_game = BB / G,
R_per_game = R / G) %>%
filter(HR_strata >= 0.4 & HR_strata <= 1.2)
dat %>%
ggplot(aes(BB_per_game, R_per_game)) +
geom_point(alpha = 0.5) +
geom_smooth(method = "lm") +
facet_wrap(~HR_strata)
We filter out strata with few points; we use this code to generate an informative data set. Then we can make a scatterplot of runs versus bases on balls for each stratum. This is what it looks like. Remember that the regression slope for predicting runs with bases on balls when we ignore home runs was 0.735. But once we stratify by home runs, these slopes are substantially reduced.
dat %>%
group_by(HR_strata) %>%
summarize(slope = cor(BB_per_game, R_per_game) * sd(R_per_game) / sd(BB_per_game))
## # A tibble: 9 x 2
## HR_strata slope
## <dbl> <dbl>
## 1 0.4 0.734
## 2 0.5 0.566
## 3 0.6 0.412
## 4 0.7 0.285
## 5 0.8 0.365
## 6 0.9 0.261
## 7 1 0.511
## 8 1.1 0.454
## 9 1.2 0.440
We can see what the slopes are by using this code: we stratify by home runs and then compute the slope using the formula we showed you previously. These values are closer to the slope we obtained for singles, 0.449, which is more consistent with our intuition: since both singles and bases on balls get us to first base, they should have about the same predictive power. Now, although our understanding of the application, our understanding of baseball, tells us that home runs cause bases on balls and not the other way around, we can still check if, after stratifying by bases on balls, we still see a home run effect or if it goes down.
dat_1 <- Teams %>%
filter(yearID %in% 1961:2001) %>%
mutate(BB_strata = round(BB / G, 1),
HR_per_game = HR / G,
R_per_game = R / G) %>%
filter(BB_strata >= 2.8 & BB_strata <= 3.9)
dat_1 %>%
ggplot(aes(HR_per_game, R_per_game)) +
geom_point(alpha = 0.5) +
geom_smooth(method = "lm") +
facet_wrap(~BB_strata)
dat_1 %>%
group_by(BB_strata) %>%
summarize(slope = cor(HR_per_game, R_per_game) * sd(R_per_game) / sd(HR_per_game))
## # A tibble: 12 x 2
## BB_strata slope
## <dbl> <dbl>
## 1 2.8 1.53
## 2 2.9 1.57
## 3 3 1.52
## 4 3.1 1.49
## 5 3.2 1.58
## 6 3.3 1.56
## 7 3.4 1.48
## 8 3.5 1.63
## 9 3.6 1.83
## 10 3.7 1.45
## 11 3.8 1.70
## 12 3.9 1.30
We use the same code that we just used for bases on balls, but now we swap home runs for bases on balls to get this plot. In this case, the slopes are the following. You can see they are all around 1.5, 1.6, 1.7, so they do not change that much from the original slope estimate, which was 1.84. Regardless, it seems that if we stratify by home runs, we have an approximately bivariate normal distribution for runs versus bases on balls. Similarly, if we stratify by bases on balls, we have an approximately bivariate normal distribution for runs versus home runs. So what do we do? It is somewhat complex to be computing regression lines for each stratum.
\[E[R~|~BB=x_1,~HR=x_2]~=~\beta_0~+~\beta_1(x_2)x_1~+~\beta_2(x_1)x_2 \]
We’re essentially fitting this model that you can see in this equation with the slopes for x1 changing for different values of x2 and vice versa. Here, x1 is bases on balls. And x2 are home runs. Is there an easier approach? Note that if we take random variability into account, the estimated slopes by strata don’t appear to change that much. If these slopes are in fact the same, this implies that this function beta 1 of x2 and the other function beta 2 of x1 are actually constant.
\[E[R~|~BB=x_1,~HR=x_2]~=~\beta_0~+~\beta_1x_1~+~\beta_2x_2 \]
Which, in turn, implies that the expectation of runs conditioned on home runs and bases on balls can be written with this simpler model. This model implies that if the number of home runs is fixed, we observe a linear relationship between runs and bases on balls, and that the slope of that relationship does not depend on the number of home runs; only the intercept changes as the home runs increase. The same is true if we swap home runs and bases on balls. In this analysis, referred to as multivariate regression, we say that the bases on balls slope beta 1 is adjusted for the home run effect. If this model is correct, then confounding has been accounted for. But how do we estimate beta 1 and beta 2 from the data? For this, we’ll learn about linear models and least squares estimates.
As described in the video, when we stratified our regression lines for runs per game vs. bases on balls by the number of home runs, what happened?
The slope of runs per game vs. bases on balls within each stratum was reduced because we removed confounding by home runs.
The slope of runs per game vs. bases on balls within each stratum was reduced because there were fewer data points.
The slope of runs per game vs. bases on balls within each stratum increased after we removed confounding by home runs.
The slope of runs per game vs. bases on balls within each stratum stayed about the same as the original slope.
Since Galton’s original development, regression has become one of the most widely used tools in data science. One reason for this has to do with the fact that regression permits us to find relationships between two variables while adjusting for others, as we have just shown for bases on balls and home runs. This has been particularly popular in fields where randomized experiments are hard to run, such as economics and epidemiology. When we’re not able to randomly assign each individual to a treatment or control group, confounding is particularly prevalent. For example, consider estimating the effect of eating fast food on life expectancy using data collected from a random sample of people in some jurisdiction. Fast food consumers are more likely to be smokers, drinkers, and have lower incomes. Therefore, a naive regression model may lead to an overestimate of the negative health effect of fast food. So how do we adjust for confounding in practice? We can use regression. We have described how, if data are bivariate normal, then the conditional expectation follows a regression line; the conditional expectation being a line is not an extra assumption, but rather a result derived from the assumption that the data are approximately bivariate normal. However, in practice it is common to explicitly write down a model that describes the relationship between two or more variables using what is called a linear model. Note that linear here does not refer to lines exclusively, but rather to the fact that the conditional expectation is a linear combination of known quantities: any combination that multiplies them by a constant and then adds them up, with perhaps a shift.
\[ 2~+~3x~-~4y~+~5z\]
For example, 2 plus 3x minus 4y plus 5z is a linear combination of x, y, and z.
\[\beta_0~+~\beta_1x_1~+~\beta_2x_2\]
So beta 0 plus beta 1 x1 plus beta 2 x2 is a linear combination of x1 and x2. The simplest linear model is a constant, beta 0. The second simplest is a line, beta 0 plus beta 1 x. For Galton’s data, we would denote the observed fathers’ heights with x1 through xn.
\[Y_i~=~\beta_0~+~\beta_1x_i~+~\epsilon_i,~i~=~1,\dots,N\]
Then we model the n son heights we are trying to predict with the following model. Here, the little xi’s are the fathers’ heights, which are fixed, not random, due to the conditioning: we’ve conditioned on these values. Yi is the random son’s height that we want to predict. We further assume that the errors, denoted with the Greek letter epsilon (\(\epsilon_i\)), are independent from each other, have expected value 0, and have a standard deviation, usually called sigma, that does not depend on i; it’s the same for every individual. We know the xi, but to have a useful model for prediction, we need beta 0 and beta 1. We estimate these from the data. Once we do, we can predict the son’s height for any father’s height x. Note that if we further assume that the epsilons are normally distributed, then this model is exactly the same one we derived earlier for the bivariate normal distribution. A somewhat nuanced difference is that in the first approach we assumed the data were bivariate normal, and the linear model was derived, not assumed. In practice, linear models are just assumed without necessarily assuming normality; the distribution of the epsilons is not specified. Nevertheless, if your data are bivariate normal, the linear model we just showed holds. If your data are not bivariate normal, then you will need other ways of justifying the model. One reason linear models are popular is that they are interpretable. In the case of Galton’s data, we can interpret the model like this: due to inherited genes, the son’s height prediction grows by beta 1 for each inch we increase the father’s height x. Because not all sons with fathers of height x are of equal height, we need the term epsilon, which explains the remaining variability. This remaining variability includes the mother’s genetic effect, environmental factors, and other biological randomness. Note that, given how we wrote the model, the intercept beta 0 is not very interpretable, as it is the predicted height of a son whose father has height 0, which is not a meaningful quantity. To make the intercept parameter more interpretable, we can rewrite the model slightly in the following way.
\[Y_i~=~\beta_0~+~\beta_1(x_i~-~\bar{x})~+~\epsilon_i,~i~=~1,\dots,N\]
Here, we have changed xi to xi minus the average height x bar; that is, we have centered our covariate xi. In this case, beta 0, the intercept, is the predicted height of the son of an average-height father, i.e., the case where xi equals x bar.
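As an added sketch (not from the original notes, and using the lm function that is formally introduced later in this section), we can verify that centering changes only the intercept, which becomes the average son height:
# Added sketch: fit the model with and without centering the father heights
coef(lm(son ~ father, data = galton_heights))
coef(lm(son ~ I(father - mean(father)), data = galton_heights))
mean(galton_heights$son)   # matches the intercept of the centered fit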
For linear models to be useful, we have to estimate the unknown parameters, the betas. The standard approach in science is to find the values that minimize the distance of the fitted model to the data. To quantify this, we use the least squares equation. For Galton’s data, we would write something like this.
\[RSS~=~\sum_{i=1}^{n}\{Y_i-(\beta_0+\beta_1x_i)\}^2\]
This quantity is called the Residual Sum of Squares, RSS. Once we find the values that minimize the RSS, we call these values the Least Squares Estimates, LSE, and denote them, in this case, with \(\hat\beta_0\) and \(\hat\beta_1\). Let’s write the function that computes the RSS for any pair of values, beta 0 and beta 1, for our heights data.
data("GaltonFamilies")
# Prepare the data set with the heights of fathers and the first son
galton_heights <- GaltonFamilies %>%
filter(childNum == 1 & gender == 'male') %>%
select(father, childHeight) %>%
rename(son = childHeight)
# RSS as a function of beta0 and beta1 for the heights data
rss <- function(beta0, beta1){
  resid <- galton_heights$son - (beta0 + beta1 * galton_heights$father)
  sum(resid^2)
}
It would look like this: for any pair of values, we get an RSS. This defines a three-dimensional surface, with beta 0 and beta 1 playing the roles of x and y, and the RSS as z. To find the minimum, you would have to look at this three-dimensional plot. Here, we will just make a two-dimensional version by keeping beta 0 fixed at 25, so we plot the RSS as a function of beta 1.
# Linear Model - Least Square Estimates
beta1 = seq(0, 1, len=nrow(galton_heights))
results <- data.frame(beta1 = beta1,
rss = sapply(beta1, rss, beta0 = 25))
results %>% ggplot(aes(beta1, rss)) +
  geom_line(col = 2)
We can use this code to produce this plot. We can see a clear minimum for beta 1 at around 0.65, so you can see how we would pick the least squares estimates. However, this minimum is for beta 1 when beta 0 is fixed at 25, and we don’t know whether the pair (25, 0.65) minimizes the RSS over all possible pairs. We could use trial and error, but it’s really not going to work here. Instead, we will use calculus: take the partial derivatives, set them equal to 0, and solve for beta 0 and beta 1. Of course, if we have many parameters, these equations can get rather complex. But there are functions in R that do these calculations for us. We will learn those soon. To learn the mathematics behind this, you can consult a book on linear models.
In R, we can obtain the least squares estimates using the lm function.
\[Y_i~=~\beta_0~+~\beta_1x_i~+~\epsilon_i\]
To fit the following model, where Yi is the son’s height and xi is the father’s height, we would write the following piece of code.
# LSE
fit <- lm(son ~ father, data = galton_heights)
fit
##
## Call:
## lm(formula = son ~ father, data = galton_heights)
##
## Coefficients:
## (Intercept) father
## 35.7125 0.5028
This gives us the least squares estimates, which we can see in the output of R. The general way we use lm is with the tilde character: the value we’re predicting goes on the left side of the tilde, and the variables we’re using to predict go on the right side. The intercept is added automatically to the model, so you don’t have to include it when you write it. The object fit that we just computed includes more information about the least squares fit. We can use the function summary to extract more of this information, as sketched after this paragraph. To understand some of the information included in this summary, we need to remember that the LSE are random variables. Mathematical statistics gives us some ideas of the distribution of these random variables, and we’ll learn some of that next.
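The summary call referred to above is not shown in the original notes; it is simply the following (an example of its full output appears later in this section):
summary(fit)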
#Question 1
data("Teams")
data_teams <- Teams %>%
filter(yearID %in% 1961:2001) %>%
mutate(R_per_game = R / G, BB_per_game = BB / G, HR_per_game = HR / G)
fit <-lm(R_per_game ~ BB_per_game + HR_per_game, data = data_teams)
fit
##
## Call:
## lm(formula = R_per_game ~ BB_per_game + HR_per_game, data = data_teams)
##
## Coefficients:
## (Intercept) BB_per_game HR_per_game
## 1.7444 0.3874 1.5611
The LSE are derived from the data, Y1 through Yn, which are random. This implies that our estimates are random variables. To see this, we can run a Monte Carlo simulation in which we assume that the son and father height data that we have defines an entire population. And we’re going to take random samples of size 50 and compute the regression slope coefficient for each one.
data("GaltonFamilies")
# Prepare the data set with the heights of fathers and the first son
galton_heights <- GaltonFamilies %>%
filter(childNum == 1 & gender == 'male') %>%
select(father, childHeight) %>%
rename(son = childHeight)
# Monte-Carlo simulation and plotting them
B <- 1000
N <- 50
lse <- replicate(B,{
sample_n(galton_heights, N, replace = TRUE) %>%
lm(son~father, data = .) %>% .$coef
})
lse <- data.frame(beta_0 = lse[1,], beta_1 = lse[2,])
p1 <- lse %>%
ggplot(aes(beta_0)) +
geom_histogram(binwidth = 5, color = "black")
p2 <- lse %>%
ggplot(aes(beta_1)) +
geom_histogram(binwidth = 0.1, color = "black")
grid.arrange(p1, p2, ncol = 2)
We write this code, which gives us several estimates of the regression slope. We can see the variability of the estimates by plotting their distribution. Here you can see the histograms of the estimated beta 0’s and the estimated beta 1’s. The reason these look normal is that the central limit theorem applies here as well. For large enough N, the least squares estimates will be approximately normal with expected values beta 0 and beta 1, respectively. The standard errors are a bit complicated to compute, but mathematical theory does allow us to compute them, and they are included in the summary provided by the lm function.
# Standard error for one simulated data set
sample_n(galton_heights, N, replace = TRUE) %>%
lm(son~father, data = .) %>% summary
##
## Call:
## lm(formula = son ~ father, data = .)
##
## Residuals:
## Min 1Q Median 3Q Max
## -5.0535 -1.5535 0.0602 1.2818 4.2374
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 28.8737 10.4732 2.757 0.008227 **
## father 0.5909 0.1521 3.884 0.000314 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 2.204 on 48 degrees of freedom
## Multiple R-squared: 0.2392, Adjusted R-squared: 0.2233
## F-statistic: 15.09 on 1 and 48 DF, p-value: 0.0003136
Here are the estimated standard errors for one of our simulated data sets. You could see them at the second column in the coefficients table.
# Estimated standard errors for the lse-data set(see above)
lse %>% summarize(se_0 = sd(beta_0), se_1 = sd(beta_1))
## se_0 se_1
## 1 8.980321 0.1293344
You can see that the standard error estimates reported by the summary function are close to the standard errors we obtained from our Monte Carlo simulation. The summary function also reports t-statistics (the t value column) and p-values (the Pr(>|t|) column). The t-statistic is not actually based on the central limit theorem, but rather on the assumption that the epsilons follow a normal distribution. Under this assumption, mathematical theory tells us that the LSE divided by their standard error follow a t distribution with N minus p degrees of freedom, with p the number of parameters in our model, which in this case is 2. The two p-values test the null hypotheses that beta 0 is 0 and beta 1 is 0, respectively. Note that, as we described previously, for large enough N the central limit theorem works and the t distribution becomes almost the same as a normal distribution. So if you either assume the errors are normal and use the t distribution, or assume that N is large enough to use the central limit theorem, you can construct confidence intervals for your parameters. Although we will not show examples in this video, hypothesis testing with regression models is commonly used in, for example, epidemiology and economics to make statements such as “the effect of A on B was statistically significant after adjusting for X, Y, and Z.” But it is very important to note that several assumptions, some of which we just described, have to hold for these statements to be valid.
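As an added sketch of the confidence interval point (this code is not part of the original notes), either base R’s confint or broom’s tidy, loaded at the top of these notes, can be applied to a fitted lm object:
# Added sketch: confidence intervals for the regression parameters of the Galton fit
fit <- lm(son ~ father, data = galton_heights)
confint(fit)                  # 95% intervals by default
tidy(fit, conf.int = TRUE)    # same information as a tidy data frame (broom)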